
Conversation

@dnerini (Member) commented on Oct 22, 2025

This PR adds support for plotting meteograms within the evaluation framework. It introduces a new plotting pipeline that generates time-series visualizations of station observations and forecast output. The implementation integrates plotting logic with existing workflows and updates dependencies. It also brings in station observations from the PeakWeather dataset for richer observational context.
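
For illustration, here is a minimal sketch of the kind of single-station meteogram such a pipeline produces (the function name and the data layout assumed below are hypothetical, not the actual evalml API):

import matplotlib.pyplot as plt
import xarray as xr


def plot_meteogram(fcst: xr.DataArray, obs: xr.DataArray, station: str, param: str, out_png: str) -> None:
    # Hypothetical helper: forecast and observations are assumed to share a
    # time coordinate and a "station" dimension; names are illustrative only.
    fig, ax = plt.subplots(figsize=(8, 4))
    fcst.sel(station=station).plot(ax=ax, label="forecast")
    obs.sel(station=station).plot(ax=ax, color="k", marker="o", linestyle="", label="observations")
    ax.set_title(f"{param} at {station}")
    ax.legend()
    fig.savefig(out_png, dpi=150, bbox_inches="tight")
    plt.close(fig)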

cosunae and others added 30 commits September 22, 2025 19:20
* Adopt more atomic approach

Also use marimo for interactive editing

* Move notebook to dedicated folder

* Remove original script

* Add example config

* Revert some wrong changes
* Adopt more atomic approach

Also use marimo for interactive editing

* Move notebook to dedicated folder

* add plot rule

* Revert some wrong changes

* Solve bugs, adopt GRIB

* Replace by plot_forecast_frame.py

* Rename

* Fix number of points for COSMO-1E

* Remove globe because it doesn't seem right

* Add possibility to set region to none for global plots

* Review

* Update workflow/scripts/src/plotting.py

* Simplify code
init_time=[t.strftime("%Y%m%d%H%M") for t in REFTIMES],
run_id=collect_all_candidates(),
param=["2t"],
sta=["GVE", "KLO", "LUG"],
Contributor:

Parameters and stations should probably be added to the config.

Contributor:

This comment still stands. I suggest making parameters and stations configurable for the meteogram. Could also do this for the other showcase plots (i.e. use the same parameters).
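
For example, a rough sketch of how this could look (the config keys below are hypothetical and not part of the current workflow config):

# Hypothetical entries in the workflow config, e.g.:
#   meteogram:
#     params: ["2t"]
#     stations: ["GVE", "KLO", "LUG"]
# The rule would then read them instead of hard-coding the lists:
init_time=[t.strftime("%Y%m%d%H%M") for t in REFTIMES],
run_id=collect_all_candidates(),
param=config["meteogram"]["params"],
sta=config["meteogram"]["stations"],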

@dnerini changed the base branch from plotting to main on November 7, 2025, 16:22
@dnerini (Member, Author) commented on Jan 13, 2026

@MicheleCattaneo now using PeakWeather for plotting station observations, see d1ed4da

@MicheleCattaneo commented:

> @MicheleCattaneo now using PeakWeather for plotting station observations, see d1ed4da

Cool!
I only had a quick look, so I did not fully understand whether you could benefit from it, but I just want to mention that you can extract windows of data as xarray objects: https://peakweather.readthedocs.io/v0.2.0/modules/dataset.html#peakweather.dataset.PeakWeatherDataset.get_windows

Could that help?
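
For reference, a minimal sketch of how this might be used (get_windows itself is the documented method linked above; the constructor argument and iteration pattern shown here are assumptions, not the verified API):

from peakweather.dataset import PeakWeatherDataset

# The data path below is an assumed constructor argument; see the linked
# documentation for the actual options of PeakWeatherDataset and get_windows.
ds = PeakWeatherDataset("data/peakweather")
for window in ds.get_windows():
    # get_windows is documented to return windows of data as xarray objects
    print(window)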

@MicheleCattaneo self-requested a review on January 13, 2026, 16:53
@dnerini marked this pull request as ready for review on January 15, 2026, 12:45
@jonasbhend (Contributor) left a comment:

Nice work, thanks Daniele. I have a few questions/comments, and I suggest we discuss a bit more how we want to shape evalml going forward. What is not clear to me is how we organize the different commands/flavors (experiment and showcase). So far we have:

  1. experiment: scores and metrics visualized for all forecasts and baselines jointly
  2. showcase: example forecast visualizations (no observations)

The meteograms in this PR sit somewhere in between as they combine one run with one baseline, and I am not sure this is the best way to go. Instead I see two ways forward:

  1. Align meteogram with current showcase, i.e. just a meteogram per run / baseline, no comparison (simple implementation)
  2. Add support for an arbitrary number of runs and baselines in the same multi-panel plot (my favorite, but probably hard(er) to implement)

What do you think, @Louis-Frey @andreaspauling @dnerini @frazane?
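
For illustration, a rough sketch of what option 2 could look like (all names and the data layout below are hypothetical, not part of the current code):

import matplotlib.pyplot as plt
import xarray as xr


def plot_multi_run_meteogram(runs: dict[str, xr.DataArray], obs: xr.DataArray,
                             params: list[str], station: str, out_png: str) -> None:
    # One panel per parameter, one line per run/baseline, observations overlaid.
    fig, axes = plt.subplots(len(params), 1, figsize=(8, 3 * len(params)),
                             sharex=True, squeeze=False)
    for ax, param in zip(axes[:, 0], params):
        for run_id, da in runs.items():
            da.sel(station=station, param=param).plot(ax=ax, label=run_id)
        obs.sel(station=station, param=param).plot(ax=ax, color="k", marker="o",
                                                   linestyle="", label="obs")
        ax.set_ylabel(param)
        ax.legend(loc="upper right", fontsize="small")
    fig.savefig(out_png, dpi=150, bbox_inches="tight")
    plt.close(fig)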

"""


rule download_obs_from_peakweather:
Contributor:

Wouldn't it be more future-proof to download obs from OGD?

baseline_zarr=lambda wc: _use_first_baseline_zarr(wc),
peakweather_dir=rules.download_obs_from_peakweather.output.peakweather,
output:
OUT_ROOT / "showcases/{run_id}/{init_time}/{init_time}_{param}_{sta}.png",
Contributor:

Is this a conscious decision to produce meteogram plots per run_id? I can see that this ties in nicely with the showcases, but in this case, I would argue we do the meteograms only for an individual run (without a baseline). Otherwise, we should strive to support an arbitrary number of run_ids and baselines in the same (multi-panel) plot.


Why leave away precipitation?


To what extent is the model output comparable to station observations? (Effective model resolution vs. point-scale observations, elevation difference, ...). Would a later integration with the MEC make sense?

Contributor:

I guess for most (all) parameters the MEC doesn't do anything more sophisticated than just nearest-neighbour / representative grid point + altitude correction. But true, maybe at least the constant lapse-rate correction would be useful.
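
For illustration, the constant lapse-rate correction mentioned above would amount to something like this (numbers and names are illustrative):

# Adjust the model temperature from the model grid-point elevation to the
# station elevation with a fixed lapse rate (standard-atmosphere value).
STANDARD_LAPSE_RATE = 0.0065  # K per metre


def lapse_rate_correction(t_model: float, z_model: float, z_station: float,
                          lapse_rate: float = STANDARD_LAPSE_RATE) -> float:
    """Return the model temperature adjusted to the station elevation."""
    return t_model + lapse_rate * (z_model - z_station)


# Example: grid point at 800 m, station at 550 m -> roughly +1.6 K
print(lapse_rate_correction(t_model=12.0, z_model=800.0, z_station=550.0))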

@Louis-Frey left a comment:

Thanks for this extension of evalml! Looks good to me.

@Louis-Frey closed this on Jan 16, 2026
@Louis-Frey reopened this on Jan 16, 2026
@Louis-Frey commented:

Sorry, some misclicks on my side. Still getting used to the interface.

@jonasbhend Yes, I agree that your option 2 is more desirable. Maybe we could go with option 1 for the moment and keep option 2 as an upgrade for later?

Also, I think that integration with the MEC would make sense at some point, in which case we would need to refactor the code anyway.
